Approximate Nearest Neighbour Search on Dynamic Datasets: An Investigation

Harwood, Ben, Dezfouli, Amir, Chades, Iadine, Sanderson, Conrad

arXiv.org Artificial Intelligence

Approximate k-Nearest Neighbour (ANN) methods are often used for mining information and aiding machine learning on large-scale, high-dimensional datasets. ANN methods typically differ in the index structure used for accelerating searches, resulting in various recall/runtime trade-off points. For applications with static datasets, runtime constraints and dataset properties can be used to empirically select an ANN method with suitable operating characteristics. However, for applications with dynamic datasets, which are subject to frequent online changes (such as the addition of new samples), there is currently no consensus as to which ANN methods are most suitable. Traditional evaluation approaches do not consider the computational costs of updating the index structure, nor the rate and size of index updates. To address this, we empirically evaluate 5 popular ANN methods on two main applications (online data collection and online feature learning) while taking these considerations into account. Two dynamic datasets are used, derived from the SIFT1M dataset with 1 million samples and the DEEP1B dataset with 1 billion samples. The results indicate that the often-used k-d trees method is not suitable on dynamic datasets, as it is slower than a straightforward baseline exhaustive search method. For online data collection, the Hierarchical Navigable Small World Graphs method achieves a consistent speedup over baseline across a wide range of recall rates. For online feature learning, the Scalable Nearest Neighbours method is faster than baseline for recall rates below 75%.
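The baseline exhaustive search the abstract compares against can be sketched as follows. On a dynamic dataset, the appeal of brute force is that "updating the index" is just appending rows, with no rebuild cost; the function name `knn_exhaustive` and the toy data below are illustrative, not from the paper.

```python
import numpy as np

def knn_exhaustive(database, query, k):
    """Brute-force k-NN: compute the distance from the query to every
    database vector, then return the indices of the k smallest."""
    dists = np.linalg.norm(database - query, axis=1)
    return np.argsort(dists)[:k]

# Toy example with SIFT-like 128-dimensional vectors.
rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 128))
q = rng.standard_normal(128)
idx = knn_exhaustive(db, q, k=5)

# A dynamic update is a plain append -- no index structure to maintain.
new_samples = rng.standard_normal((10, 128))
db = np.vstack([db, new_samples])
```

Index-based methods such as HNSW avoid the full distance scan, but each insertion must also update the graph, which is exactly the cost the traditional static evaluations omit.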


Deep Calibration With Artificial Neural Network: A Performance Comparison on Option Pricing Models

Kim, Young Shin, Kim, Hyangju, Choi, Jaehyung

arXiv.org Artificial Intelligence

Since the seminal work of Black and Scholes (1973) and Merton (1973), the Black-Scholes model has remained the most fundamental model for option pricing. However, its restrictive assumptions, such as constant volatility and Geometric Brownian Motion (GBM) dynamics, have been criticized for not reflecting the empirical characteristics of financial markets. Many subsequent models have been proposed to relax the assumptions of the Black-Scholes model. One successful approach employs stochastic volatility under the generalized autoregressive conditional heteroskedasticity (GARCH) framework. An early attempt was introduced by Engle and Mustafa (1992), focusing on implied conditional volatilities. Subsequently, Duan (1995) developed a more rigorous GARCH option pricing framework using the locally risk-neutral valuation relationship, under which the one-period-ahead conditional variance is the same under both the risk-neutral measure and the physical measure.
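The conditional-variance recursion underlying these GARCH option pricing models can be sketched as below. This is the plain GARCH(1,1) update, h_{t+1} = ω + α r_t² + β h_t; Duan's risk-neutralization adds a risk-premium shift that is omitted here, and the parameter values are illustrative only.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta, h0):
    """GARCH(1,1) conditional variance recursion:
    h[t+1] = omega + alpha * returns[t]**2 + beta * h[t],
    starting from the initial variance h0."""
    h = np.empty(len(returns) + 1)
    h[0] = h0
    for t, r in enumerate(returns):
        h[t + 1] = omega + alpha * r**2 + beta * h[t]
    return h

# One step with made-up parameters: a single return of 10%
# starting from a daily variance of 4e-4 (2% daily vol).
h = garch11_variance(np.array([0.10]), omega=1e-5, alpha=0.1, beta=0.8, h0=4e-4)
```

Under Duan's locally risk-neutral valuation relationship, this same one-step-ahead variance carries over unchanged to the risk-neutral measure, which is what lets option prices be computed by Monte Carlo simulation of the risk-neutralized return process.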